Output and Evaluation
FEA LEARNING CENTER
Output Requests & Post-Processing
Getting the Data Right
By Joseph P. McFadden Sr.
McFaddenCAE.com
Companion document to the FEA Learning Center
in the Abaqus INP Comprehensive Analyzer
You can build a perfect model.
Mesh it beautifully. Get every material property right. Apply the exact boundary conditions and loads. Run the analysis, wait for the solver to finish, and get a green checkmark that says "completed successfully."
And then lose everything in the last mile — because you didn't capture the right data, or you corrupted the signal on the way out.
Output requests and post-processing are the least glamorous parts of simulation work. Nobody gives a conference talk about their output interval settings. But they're where truth meets the spreadsheet. Where the raw physics in your ODB file either makes it to your SRS calculation intact, or arrives as something subtly wrong — filtered incorrectly, truncated at a step boundary, aliased by an output interval that was too coarse.
And the problem with subtly wrong data is that it produces subtly wrong answers. Your SRS is off by 15 percent and you don't know it. Your jerk calculation is dominated by noise because you differentiated without filtering. Your velocity change is wrong because you exported only one step of a two-step analysis.
This discussion is about doing the last mile right.
The Why — Two Kinds Of Output And What They're For
Abaqus gives you two fundamentally different categories of output: field output and history output. Understanding the difference — and using each one for its intended purpose — saves enormous amounts of time and prevents subtle errors.
Field output captures spatial data — stress contours across the entire model, displacement fields, acceleration maps — at a limited number of time points. It's what you use for visualization. Color plots, deformed shape animations, contour maps. It lets you see where stress concentrates, where deformation is largest, how the mode shape looks. But because it's stored for the entire model at each output frame, the file size grows quickly. You can't afford to write field output at every microsecond of an explicit analysis — your ODB file would be terabytes. So you write field output at intervals coarse enough to capture the general response but fine enough to animate smoothly — typically 50 to 200 frames across the analysis.
History output captures temporal data — the complete time history of selected variables at specific points. Acceleration at your accelerometer location. Displacement at a critical corner. Energy variables for the whole model. Because it records data at only a few locations, you can afford much finer time resolution. And that fine time resolution is essential for any downstream DSP — SRS computation, jerk extraction, velocity change, frequency analysis.
The practical guideline: field output is for eyes, history output is for math. If you're going to look at it in CAE Visualization as a color plot, it's field output. If you're going to export it to a text file and run algorithms on it, it's history output.
The What — What To Request And Why
For any explicit dynamic analysis — drop test, impact, shock — there is a minimum set of history outputs that you should always request.
At your sensor locations — the nodes where you want acceleration, velocity, and displacement data — request A1, A2, A3 for acceleration components, V1, V2, V3 for velocity, and U1, U2, U3 for displacement. These are the raw kinematic signals you'll use for SRS, jerk, velocity change, and comparison with physical test data.
Choose these locations deliberately. Where would you mount an accelerometer in a physical test? Put a history output node there. Where is the critical component — the display glass, the PCB center, the battery mounting point? Put a history output node there too. Don't just request output at random nodes. Match your simulation output locations to your test instrumentation plan, because you'll need to compare the two.
For the whole model, always request energy variables: ALLKE for kinetic energy, ALLIE for internal energy, ALLVD for viscous dissipation, ALLFD for frictional dissipation, ALLAE for artificial strain energy from hourglass control, ALLWK for external work, and ETOTAL for the total energy balance.
ETOTAL is your single most important quality metric. It should remain flat throughout the analysis. If it drifts, energy is being created or destroyed artificially, and your results are not trustworthy. ALLAE should stay below five percent of ALLIE — if it's higher, hourglassing is corrupting your results. These energy checks take seconds to perform and can save you weeks of debugging failed correlations.
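Scripted, those two checks take a few lines. A sketch in Python, assuming the energy histories have already been exported as arrays; the tolerances and the choice of peak ALLIE as the reference energy scale are rules of thumb of mine, not solver settings:

```python
import numpy as np

def energy_checks(etotal, allae, allie, drift_tol=0.01, ae_tol=0.05):
    """Two quick quality checks on whole-model energy histories.
    Returns (etotal_flat, hourglass_ok). Tolerances are rules of
    thumb, not Abaqus defaults."""
    scale = max(np.max(np.abs(allie)), 1e-30)       # reference energy scale
    drift = np.max(np.abs(etotal - etotal[0]))      # departure from flat
    etotal_flat = drift <= drift_tol * scale        # ETOTAL should stay flat
    hourglass_ok = np.max(allae) <= ae_tol * scale  # ALLAE under 5% of ALLIE
    return etotal_flat, hourglass_ok
```

If either flag comes back False, stop and debug the model before trusting anything downstream.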
Sampling Rate And The Nyquist Rule
The time interval you specify for history output determines your effective sampling rate. And your sampling rate determines the highest frequency you can capture without corruption.
The Nyquist theorem says your sampling rate must be at least twice the highest frequency of interest. In practice, use five to ten times — not just two — because two times is the mathematical minimum and gives you terrible signal reconstruction.
For a drop test where you care about structural response up to 10,000 Hertz, a history output interval of 1 × 10⁻⁵ seconds gives you a sampling rate of 100,000 Hertz — ten times oversampling. That's a solid default for most component-level impact work.
For pyroshock — interest to 50,000 Hertz — you need an output interval of about 2 × 10⁻⁶ seconds, giving 500,000 Hertz sampling. For low-frequency seismic analysis — interest to 50 Hertz — an interval of 1 × 10⁻³ seconds is more than adequate.
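The arithmetic is simple enough to script. A minimal sketch; the function name and the factor-of-ten default are my own convention for the oversampling guideline, not an Abaqus setting:

```python
def history_output_interval(f_max_hz, oversample=10):
    """Suggested history output interval in seconds for a given
    highest frequency of interest. The default factor of ten is the
    practical oversampling guideline, not the bare Nyquist 2x."""
    return 1.0 / (oversample * f_max_hz)

dt_drop = history_output_interval(10_000)  # drop test: 1e-5 s (100 kHz sampling)
dt_pyro = history_output_interval(50_000)  # pyroshock: 2e-6 s (500 kHz sampling)
dt_seis = history_output_interval(50)      # seismic: 2e-3 s (1e-3 s is finer still)
```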
Don't confuse the output interval with the solver time increment. In explicit dynamics, the solver computes at every Courant-limited time step — which can be sub-microsecond for fine meshes. But it only writes history output at the interval you specify. Between output points, the solver interpolates to your requested time. Your output interval should be much larger than the solver increment but small enough to satisfy Nyquist for your frequency range of interest.
Setting the interval too coarse aliases high-frequency content — real signal energy above the Nyquist frequency folds back into your data as phantom low-frequency artifacts. That corrupts your SRS, your jerk, everything. Setting it too fine creates enormous output files that slow down every subsequent operation. The right interval is a deliberate engineering choice, not a default.
The Critical Step — Exporting All Steps
This is the single most common mistake in Abaqus post-processing, and I need to give it its own section because it causes real problems in real projects.
Many analyses have multiple steps. A gravity preload step followed by a drop step. A free-fall step followed by a contact step followed by a rebound step. An initial velocity step followed by a deceleration step.
When you create XY data in Abaqus CAE — selecting history output to plot or export — the software defaults to the currently active step. If you don't deliberately select all steps, you get a partial time history that covers only the currently active step, with its time axis starting at zero.
Here is the procedure. And it must become muscle memory.
Open your ODB in Visualization. Go to Tools, XY Data, Create. Select ODB history output. Choose your variable — say A2 at your accelerometer node. Now — before you click anything else — go to the Steps/Frames tab. By default, only the current step may be checked. You must check every step in the list. Every single one. This gives you the complete, continuous time history that spans the entire analysis from first step to last.
Without this step, your exported time history starts in the middle of the event. Your SRS is computed from incomplete data. Your velocity change is wrong because you missed the initial velocity. Your jerk trace starts at the wrong time. And the results look plausible — there's no obvious error flag — they're just wrong.
After you create the XY data with all steps selected, verify it. Plot it. Does the time axis start at the beginning of the first step? Is time continuous across step boundaries — no jumps, no resets to zero? Does the total time span match your expected analysis duration? Does the signal make physical sense — does the acceleration start at a reasonable value, peak during the impact, and ring down afterward?
Only after you've verified the data in CAE should you export it. File, Report, XY Data. Choose report format — tab-delimited text is the most universal. Select all data sets you want. Save. Then open the file and verify again — time in column one, value in column two, continuous, complete.
The Post-Processing Pipeline
Raw exported data is not ready for SRS, jerk, or comparison with test data. It must go through a processing pipeline. Every step in this pipeline matters, and skipping any one of them introduces errors that propagate into every derived quantity.
Step one — verify raw data. Does the time span cover the full event? Is time monotonically increasing? Are there NaN values, infinities, or suspicious spikes? Does the peak match what you see in CAE Visualization? If any check fails, stop. Go back to the export step.
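These checks are easy to automate so they actually get run every time. A minimal Python sketch; the function name, messages, and the five percent duration tolerance are my own choices:

```python
import numpy as np

def check_raw_history(t, signal, expected_duration, tol=0.05):
    """Sanity checks on an exported time history before any DSP.
    Returns a list of failure messages; empty means all checks passed."""
    problems = []
    if not np.all(np.diff(t) > 0):
        problems.append("time is not strictly increasing")
    if not np.all(np.isfinite(signal)):
        problems.append("signal contains NaN or infinite values")
    span = float(t[-1] - t[0])
    if abs(span - expected_duration) > tol * expected_duration:
        problems.append("time span does not match the expected duration")
    return problems
```

If the returned list is not empty, stop and go back to the export step.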
Step two — interpolate to a constant time step. Even though you specified a constant output interval, the actual exported data may have slight variations at step boundaries or where the solver adjusted timing. DSP operations — FFT, digital filtering, SRS computation — require a strictly uniform time base. Use linear or cubic spline interpolation to resample onto a perfectly uniform grid with a time step equal to or slightly finer than your original interval.
Step three — remove DC offset and trends. Subtract the mean of the pre-event quiet period. If there's a linear drift, detrend it. For simulation data, the offset is usually small but not zero. Even a tiny DC offset integrates into a growing velocity drift that corrupts velocity change calculations.
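Steps two and three can be sketched together in a few lines of NumPy. The function name, the linear interpolation choice, and the quiet-period boundary argument are illustrative, not a prescribed recipe:

```python
import numpy as np

def to_uniform_and_detrend(t, signal, dt, quiet_end):
    """Resample onto a strictly uniform time base by linear
    interpolation, then subtract the mean of the pre-event quiet
    period (everything up to quiet_end seconds)."""
    t_u = np.arange(t[0], t[-1], dt)
    s_u = np.interp(t_u, t, signal)             # linear resampling
    s_u = s_u - s_u[t_u <= quiet_end].mean()    # remove DC offset
    return t_u, s_u
```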
Step four — filter appropriately for your analysis goal.
For SRS computation, generally no additional filtering is needed — the SRS algorithm itself acts as a filter bank. But if the signal has obvious numerical noise from contact chatter or hourglassing, a gentle low-pass filter above your SRS frequency range removes it without affecting the result.
For jerk computation, you must filter before differentiating. Butterworth low-pass, fourth order, zero-phase. Cutoff frequency of 5,000 to 10,000 Hertz for drop test work. Differentiate the filtered signal, never the raw signal.
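In Python with SciPy, that recipe might look like this. The 7,500 Hertz default cutoff is just an illustrative midpoint of the 5,000-to-10,000 range, and the function name is my own:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def jerk_from_accel(t, accel, fs, cutoff_hz=7500.0):
    """Low-pass filter the acceleration (4th-order Butterworth, run
    forward and backward with filtfilt for zero phase), then
    differentiate the *filtered* signal to get jerk."""
    b, a = butter(4, cutoff_hz, fs=fs)   # 4th-order low-pass design
    accel_f = filtfilt(b, a, accel)      # zero-phase filtering
    return np.gradient(accel_f, t)       # d(accel_f)/dt = jerk
```

Differentiating the raw signal instead amplifies every bit of numerical noise into the jerk trace.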
For comparison with physical test data, apply the same filter that was used on the physical measurement. CFC filters per SAE J211 are the standard for impact testing. CFC 60 has a cutoff near 100 Hertz — gross body motion. CFC 180 is about 300 Hertz — structural response. CFC 600 is about 1,000 Hertz — component response. CFC 1000 is about 1,650 Hertz — local response. The critical rule: both simulation and test data must be filtered with the same filter before comparison. Unfiltered simulation versus CFC-filtered test data is not a valid comparison.
Step five — decimate if needed. If your data was sampled at a very high rate and you only need content up to a much lower frequency, you can reduce the data volume by decimation — keeping every Nth sample. But you must apply an anti-aliasing low-pass filter before decimating. The cutoff must be at the new Nyquist frequency — half the new sampling rate. Decimating without filtering first causes aliasing — high-frequency content folds back into the retained frequency band as phantom artifacts. The scipy decimate function handles this automatically, applying the anti-alias filter for you.
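A minimal illustration with SciPy; the signal frequency and sampling rates are made up for the example:

```python
import numpy as np
from scipy.signal import decimate

# Sampled at 1 MHz, but only content well below the new Nyquist is present:
fs = 1_000_000
t = np.arange(0.0, 0.01, 1.0 / fs)
a = np.sin(2 * np.pi * 1000.0 * t)       # 1 kHz signal

factor = 10                               # new rate 100 kHz, new Nyquist 50 kHz
a_dec = decimate(a, factor, zero_phase=True)  # anti-alias filter applied for you
t_dec = t[::factor]
```

Keeping every tenth sample with plain slicing, without `decimate`'s built-in anti-alias filter, is exactly the mistake the text warns about.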
Step six — compute derived quantities.
Velocity from acceleration: integrate using the trapezoidal rule. For a drop test, the initial velocity is the impact velocity — the square root of two times gravity times drop height. Verify that the final velocity makes physical sense — near zero for a fully captured impact, or a rebound velocity for an elastic event.
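A sketch in Python with SciPy. The sign convention here (downward velocity negative) and the function name are assumptions of the example, not a universal rule:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def velocity_from_accel(t, accel, drop_height_m=None, g=9.81):
    """Trapezoidal integration of acceleration (m/s^2) to velocity.
    For a drop test the initial condition is the impact velocity,
    v0 = sqrt(2*g*h); pass drop_height_m to apply it. Downward is
    taken as negative in this sketch."""
    v0 = -np.sqrt(2.0 * g * drop_height_m) if drop_height_m is not None else 0.0
    return cumulative_trapezoid(accel, t, initial=0.0) + v0
```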
Velocity change: the difference between maximum and minimum velocity during the event.
Jerk: differentiate the filtered acceleration. Extract peak magnitude, duration above threshold, and jerk impulse.
SRS: compute from the clean, uniformly sampled acceleration signal using a validated algorithm. The Smallwood ramp-invariant method is the standard. Specify Q factor — Q of 10 for general use. Report the Q factor with your results.
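For reference, here is a compact sketch of the Smallwood ramp-invariant recursion as it is usually published, one SDOF filter per natural frequency. Treat it as illustrative and check it against a validated implementation before relying on it for real work:

```python
import numpy as np
from scipy.signal import lfilter

def srs_maximax(t, accel, f_natural, Q=10.0):
    """Maximax absolute-acceleration SRS via the Smallwood
    ramp-invariant digital filter. Requires a strictly uniform
    time base."""
    dt = t[1] - t[0]
    zeta = 1.0 / (2.0 * Q)
    peaks = []
    for fn in f_natural:
        wn = 2.0 * np.pi * fn
        wd = wn * np.sqrt(1.0 - zeta**2)
        E = np.exp(-zeta * wn * dt)
        K = wd * dt
        C, S = E * np.cos(K), E * np.sin(K)
        Sp = S / K
        b = [1.0 - Sp, 2.0 * (Sp - C), E**2 - Sp]  # ramp-invariant numerator
        a = [1.0, -2.0 * C, E**2]                  # denominator
        resp = lfilter(b, a, accel)                # SDOF absolute acceleration
        peaks.append(np.max(np.abs(resp)))
    return np.array(peaks)
```

A useful spot check: a long resonant sine dwell at a natural frequency should return roughly Q times the input amplitude.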
Post-Processing Inside Abaqus CAE
You don't always need to export and process externally. Abaqus CAE has built-in XY data operations that handle several common post-processing tasks. They're not as flexible as external DSP tools, but they're fast and convenient for quick checks.
Access them through Tools, XY Data, Create, Operate on XY Data.
The butterworth function applies a low-pass Butterworth filter with a specified cutoff frequency. Good for general smoothing.
The saeFilter function applies an SAE J211 CFC filter. This is the one you want for drop test work. Specify the CFC class — 60, 180, 600, or 1000. It applies the standard four-pole Butterworth filter at the corresponding cutoff frequency. Use this when comparing simulation to CFC-filtered test data.
The differentiate function computes the time derivative. Use it on filtered acceleration to get jerk — but filter first using the butterworth or saeFilter function. The integrate function computes the time integral — use it on acceleration to get velocity, or double-integrate to get displacement.
Mathematical operations let you combine signals — add, subtract, multiply. Compute the resultant acceleration magnitude from the square root of the sum of the squares of the three components. Compare signals by subtracting one from another and checking the residual.
These built-in operations are excellent for quick sanity checks during the analysis. Does the filtered acceleration look clean? Does the integrated velocity start at the expected impact velocity? Does the resultant magnitude match the individual component peaks? Once you've validated the data in CAE, export the verified version for detailed external processing.
Perturbation Output — A Different Kind Of Data
Everything I've discussed so far applies to explicit dynamic analysis — time-domain simulations where the output is a signal that evolves over time. Drop tests, impacts, shocks. The data is a waveform. The post-processing is signal processing.
But a large part of this Learning Center series covers perturbation procedures — modal analysis, harmonic response, random vibration, and SRS. And these produce fundamentally different kinds of output. If you approach perturbation results with time-domain thinking, you'll misread them. The output type changes with the analysis type, and reading each one correctly requires knowing what kind of answer you're looking at.
Start with modal analysis. The output is a table of natural frequencies, mode shapes, and participation factors. There is no time history. There is no stress field in the traditional sense. The mode shapes are relative displacement patterns — they show you where the structure moves and in what ratio, but not how much. The absolute scaling is arbitrary.
The critical post-processing check for modal analysis is effective mass. For each mode, the solver reports what fraction of the total model mass participates in that mode in each direction. Sum those fractions across all extracted modes. If the cumulative effective mass in any excitation direction is below 90 percent of the total mass, you haven't extracted enough modes. The missing mass means there are structural responses that your subsequent perturbation analyses — harmonic, random, SRS — will not capture. This is the modal equivalent of the ETOTAL energy check in explicit dynamics. If it fails, nothing downstream is trustworthy.
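The bookkeeping is a one-liner once the per-mode effective mass fractions are in hand. The numbers below are invented for illustration:

```python
import numpy as np

# Modal effective mass fractions, one row per mode, one column per
# excitation direction (X, Y, Z). Values are hypothetical.
frac = np.array([
    [0.62, 0.01, 0.03],
    [0.05, 0.55, 0.02],
    [0.18, 0.20, 0.48],
    [0.08, 0.11, 0.30],
])

cumulative = frac.sum(axis=0)              # captured fraction per direction
ok = bool(np.all(cumulative >= 0.90))      # the 90 percent rule, every direction
```

Here X captures 93 percent but Y and Z fall short, so more modes are needed before any downstream perturbation analysis can be trusted.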
Also check the frequency range. If your PSD input goes to 2,000 Hertz, extract modes well beyond that — 3,000 to 4,000 Hertz. Modes near the upper boundary of your extraction range may have artificially distorted shapes because the solver was told to stop looking. Including modes beyond your excitation range ensures that the modes within it are accurate.
Harmonic response output is frequency-domain and deterministic. At each frequency step, you get the peak amplitude and phase of the sinusoidal response. The amplitude is the actual peak stress, displacement, or acceleration per cycle at that excitation frequency. No sigma multiplier. No statistical interpretation. If the result says 40 Megapascals at 150 Hertz, the stress oscillates between plus and minus 40 Megapascals every cycle when the structure is excited at that frequency.
The key post-processing product is the frequency response function — amplitude versus frequency at a specific node. Plot it. The peaks are resonances. The valleys are anti-resonances. The height of each peak is controlled by damping — sharp and tall for lightly damped modes, broad and lower for heavily damped ones.
You can extract Q directly from the frequency response function. Find the peak amplitude. Drop 3 dB below the peak — that's the peak value divided by the square root of two. Find the two frequencies on either side of the peak where the amplitude equals that value. The difference between those two frequencies is the half-power bandwidth. Divide the resonant frequency by the half-power bandwidth and you have Q. If you have test data, overlay the measured and analytical frequency response functions. Agreement in peak location validates your modal frequencies. Agreement in peak height validates your damping. Discrepancies tell you exactly what's wrong and which direction to correct.
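That half-power procedure is straightforward to script. A sketch that assumes one clean, well-resolved peak with monotonic skirts on both sides (real FRFs with neighboring modes need more care):

```python
import numpy as np

def q_from_frf(freq, amp):
    """Estimate Q from an FRF magnitude using the half-power
    (-3 dB) bandwidth. Assumes a single clean peak."""
    i_pk = int(np.argmax(amp))
    half = amp[i_pk] / np.sqrt(2.0)   # -3 dB below the peak
    # Frequency where the rising skirt crosses the half-power level:
    f_lo = np.interp(half, amp[: i_pk + 1], freq[: i_pk + 1])
    # Same on the falling skirt (reversed so amplitude is increasing):
    f_hi = np.interp(half, amp[i_pk:][::-1], freq[i_pk:][::-1])
    return freq[i_pk] / (f_hi - f_lo)
```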
One subtlety: harmonic output includes phase. The phase tells you the timing relationship between the response and the input. Below resonance, the phase is near zero — the response is in sync with the input. At resonance, the phase passes through 90 degrees. Above resonance, it approaches 180 degrees — the response is nearly opposite to the input. If you're combining harmonic responses from multiple excitation sources or comparing response at two different locations, the phase matters. Two stresses of 40 Megapascals that are 180 degrees out of phase cancel. Two that are in phase add to 80. Ignoring phase and just adding magnitudes gives conservative but potentially misleading results.
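The phasor arithmetic behind that example, written out:

```python
import numpy as np

# Two 40 MPa harmonic stresses represented as phasors (magnitude, phase):
s1 = 40.0 * np.exp(1j * 0.0)             # 40 MPa at 0 degrees
s2_opposed = 40.0 * np.exp(1j * np.pi)   # 40 MPa, 180 degrees out of phase
s2_inphase = 40.0 * np.exp(1j * 0.0)     # 40 MPa, in phase

cancel = abs(s1 + s2_opposed)    # opposed phasors cancel to ~0 MPa
combine = abs(s1 + s2_inphase)   # in-phase phasors add to 80 MPa
```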
Random vibration output is statistical. This is the most important distinction in all of FEA post-processing, and it's the one that catches the most engineers.
The stress contour plot from a random vibration analysis shows the RMS stress field — the root-mean-square of the fluctuating stress response to the broadband PSD input. As we covered in detail in the random vibration discussion, for zero-mean vibration this RMS value equals the standard deviation of the stress distribution at each point. It is not a peak. It is not an average of peaks. It is a statistical spread.
The contour is a one-sigma probability envelope. It tells you where the fluctuations are most intense — red for large standard deviation, blue for small — but it does not represent a stress state that the part ever actually experiences at any instant. The part never looks like that contour plot. At any given moment, the actual stress distribution is different — some locations higher, some lower, some opposite in sign.
The design stress is the RMS value multiplied by your sigma factor. For standard structural qualification, that factor is three. Three-sigma means 99.73 percent of the time the instantaneous stress stays within this range. When you look at an RMS stress contour showing 80 Megapascals at a critical location, your design stress is 240 Megapascals. If your yield strength is 250 Megapascals, you do not have a comfortable margin — you have a 10-Megapascal margin that vanishes if the damping is slightly lower than assumed or the input is slightly higher than specified.
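The margin arithmetic from that example, spelled out:

```python
rms_stress = 80.0        # MPa, RMS contour value at the critical location
sigma_factor = 3.0       # standard three-sigma qualification
yield_strength = 250.0   # MPa

design_stress = sigma_factor * rms_stress    # 240 MPa
margin = yield_strength - design_stress      # 10 MPa: thin, not comfortable
```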
For random vibration, Anaqus can also output the response PSD at specific nodes — the frequency-domain decomposition of the response. This is valuable because it tells you which modes are contributing most to the RMS response. If one resonance peak dominates the response PSD, that mode is driving the stress. Stiffen the structure to shift that resonance away from the input energy, or add damping to reduce the amplification, and the RMS stress drops. Without the response PSD, you know the total RMS but not which mode to blame.
SRS output from a perturbation-based response spectrum analysis is yet another format. The output is a set of peak response values — one per mode — combined using a statistical rule like SRSS or absolute sum. The result is a single stress field, a single displacement field. No time history. No frequency sweep. Just the combined peak response. The combination rule matters — SRSS assumes the modal peaks don't occur simultaneously and is less conservative. Absolute sum assumes they all peak together and is more conservative. For well-separated modes, SRSS is standard. For closely spaced modes, NRL grouping or absolute sum may be more appropriate. Know which rule was used and why.
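The two combination rules, side by side on invented per-mode peaks:

```python
import numpy as np

# Per-mode peak stresses from a response spectrum analysis (invented values):
modal_peaks = np.array([30.0, 25.0, 12.0])   # MPa

srss = float(np.sqrt(np.sum(modal_peaks**2)))  # peaks assumed non-simultaneous
abs_sum = float(np.sum(np.abs(modal_peaks)))   # all peaks assumed coincident
# srss is always <= abs_sum; the gap is the extra conservatism of absolute sum
```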
The Common Thread — Know What Kind Of Answer You Have
Time-domain output from explicit dynamics: the number is the actual value at a specific instant. Filter it, integrate it, differentiate it, compute derived quantities. The post-processing is signal processing.
Frequency-domain output from harmonic response: the number is the peak amplitude at a specific excitation frequency. Plot the frequency response function, extract Q, compare to test data. The post-processing is transfer function analysis.
Statistical output from random vibration: the number is the standard deviation of a probability distribution. Multiply by three for the design value. The contour is a probability envelope, not a stress snapshot. The post-processing is statistical interpretation.
Combined peak output from SRS response spectrum: the number is a statistically combined peak across modes. Know your combination rule. The post-processing is understanding what conservatism that rule introduces.
Modal output: the numbers are relative patterns and mass fractions. Check effective mass coverage. Verify frequency range. The post-processing is validation that the foundation is complete before building anything on top of it.
Getting the analysis right matters. Getting the output right matters equally. Because the output is what you report, what you compare to test data, what you base design decisions on. A perfect solver run that produces misinterpreted results is worse than no analysis at all — because it gives you false confidence.
Why We Simulate
This is the last discussion in this Learning Center series. And if you've listened to all of them — from modal analysis through shock, SRS, random vibration, harmonic response, perturbation limitations, jerk and fragility, and now output and post-processing — you've taken a journey through an entire philosophy of how we understand physical systems.
I want to step back from the technical details and talk about what all of this is really for.
We simulate because we want to understand. Not just predict — understand. There's a difference. A prediction tells you a number. Understanding tells you why. A prediction says the stress at this location is 180 Megapascals. Understanding says the stress is 180 Megapascals because the third bending mode concentrates strain energy at that fillet, the input PSD has significant energy at that mode's natural frequency, the damping is low enough that the amplification factor is 25, and the geometry creates a stress concentration factor of 3.2. The prediction tells you whether to worry. The understanding tells you what to change.
That understanding comes from poking the system and listening to how it responds. This is the thread that runs through every discussion in this series.
Modal analysis is the first poke — the gentlest one. You're not loading the structure. You're asking it to reveal its natural character. Where does it want to vibrate? At what frequencies? In what shapes? The mode shapes and natural frequencies are the system's fingerprint. They don't depend on the loading. They don't depend on the environment. They are intrinsic — properties of the mass and stiffness distribution alone. Every analysis that follows uses these as its foundation.
Harmonic response is the methodical poke. One frequency at a time, across the full range. You're having a one-on-one conversation with the structure at every frequency in the band. The frequency response function that emerges is the transfer function — the complete map of how the structure amplifies and attenuates. It's the system's dynamic personality, laid out for you to read.
Random vibration is the realistic poke. All frequencies at once, the way the real world delivers them. Broadband noise, shaped by the PSD of the actual environment — the truck, the forklift, the conveyor, the launch vehicle. The response is statistical because the input is statistical. The output tells you the probability of the stress being within a range, not the stress at a single instant. It's a different kind of answer — and reading it correctly requires understanding the statistics, not just the mechanics.
Shock and SRS are the violent poke. A transient event — brief, intense, over quickly. The SRS collapses the time history into a frequency-domain envelope of peak response, telling you the damage potential across frequency in a single curve. Drop test simulation captures the full nonlinear physics — contact, material behavior, large deformation — that the perturbation procedures assume away.
And output and post-processing — this discussion — is about making sure the system's answer reaches you intact. Clean data. Correct sampling. Proper filtering. Appropriate interpretation for the type of analysis. The bridge between what the solver computed and what you understand.
Every one of these tools exists because a single simulation type cannot capture everything. Static analysis misses dynamics. Modal analysis misses loading. Harmonic misses broadband. Random misses transients. Shock misses steady-state. Each tool illuminates one aspect of the system's behavior. Together, they give you the complete picture — or as close to complete as current methods allow.
The engineers who use these tools most effectively are the ones who understand what each analysis is asking, what kind of answer it produces, and what it cannot tell you. They know that a random vibration RMS contour is not a stress map. They know that a harmonic FRF at low input amplitude may underpredict damping at high amplitude. They know that an SRS loses time information. They know that the perturbation family shares a mathematical foundation — and shares its limitations.
And they know that the real world doesn't care which analysis type you used. The real world delivers all of its loads simultaneously — broadband noise, tonal excitation, transient shocks, thermal gradients, static preloads — and the structure must survive all of them. Simulation lets you decompose that complexity into manageable pieces, understand each piece, and then synthesize the understanding into design decisions that make the product survive.
That's what this series has been about. Not software procedures. Not button clicks. Not which menu to use in CAE. It's about understanding systems — by choosing the right poke, listening to the response, interpreting it correctly, and using that understanding to make something better than it was before.
DSP tools for the complete post-processing pipeline — interpolation, filtering, decimation, SRS, jerk, velocity change, and multi-parameter fragility assessment from exported Abaqus report files — are available at McFaddenCAE.com.
For the foundational principles that make all of this work — energy balance, mass scaling, hourglass control, and the discipline of keeping the simulation honest — see the FEA Best Practices audiobook Volume 4: Keeping the Simulation Honest, at McFaddenCAE.com.
This has been a Learning Center discussion on output requests and post-processing. And with that, this Learning Center series is complete. I'm Joe McFadden. Thanks for listening to all of it.
About the Author
Joseph P. McFadden Sr. is a CAE engineer specializing in finite element analysis, modal analysis, materials behavior, and injection mold tooling validation. With nearly four decades of experience in structural simulation, he brings a holistic perspective to engineering education — connecting how systems respond to how people think and learn.
His work at McFaddenCAE.com includes the Abaqus INP Comprehensive Analyzer — a desktop tool for analyzing, visualizing, and extracting sub-assemblies from large FEA models without requiring an Abaqus license — along with DSP tools for SRS computation, jerk extraction, velocity change analysis, and energy balance verification.
The FEA Learning Center is an integrated educational platform within the Analyzer, providing guided discussions on structural dynamics topics with working example INP files. This document series is the companion written reference for those discussions.
The four-volume FEA Best Practices audiobook series — Building the Model, The System's Natural Character, When Things Collide, and Keeping the Simulation Honest — is available at McFaddenCAE.com.